Bots Outperform Humans if They Impersonate Us


"How can I help you?" "Hi, I'm calling to book a women's haircut for a client. "Sure, give me one second." "For what time are you looking for around?" The machine assistant never identified itself as a bot in the demo. And Google got a lot of flak for that. They later clarified that they would only launch the tech with "disclosure built in." But therein lies a dilemma, because a new study in the journal Nature Machine Intelligence suggests that a bot is most effective when it hides its machine identity. "That is, if it is allowed to pose as human." Talal Rahwan is a computational social scientist at New York University's campus in Abu Dhabi. His team recruited nearly 700 online volunteers to play the prisoner's dilemma--a classic game of negotiation, trust and deception--against either humans or bots. Half the time, the human players were told the truth about who they were matched up against. The other half, they were told they were playing a bot when they were actually playing a human or that they were battling a human when, in fact, it was only a bot. And the scientists found that bots actually did remarkably well in this game of negotiation--if they impersonated humans. "When the machine is reported to be human, it outperforms humans themselves.